Fake robocalls. Doctored videos. Why Facebook is being urged to fix its election problem.
As the nation heads into the 2024 presidential election, the independent body that reviews Meta’s content moderation decisions is urging the tech giant to overhaul its policy on manipulated videos to encompass fake or distorted clips that can mislead voters and tamper with elections.
The test case was a doctored video of President Joe Biden that appeared on Facebook last May.
Meta currently bans video clips that have been digitally created or altered with generative artificial intelligence to make it appear as if people have said something they did not. But the policy doesn't address cruder clips − so-called "cheap fakes" − made with basic editing tools, nor does it address clips that show someone doing something they did not do.
The Oversight Board upheld Meta's decision to allow the Biden video to remain on Facebook but called on Meta to crack down on all doctored content, regardless of how it was created or altered. It also recommended that Meta clearly define the aim of its policy to encompass election interference.
Of particular concern is faked audio, which the board said is “one of the most potent forms of electoral disinformation we’re seeing around the world.”
In January, an artificially generated robocall mimicking Biden's voice encouraged New Hampshire voters to skip the state's primary. The call is being probed by the New Hampshire Attorney General's Office as an attempt at voter suppression. It had no effect on the outcome of the primary − Biden won in a landslide − but critics say it illustrated how generative AI could be used to influence an election.
“As it stands, the policy makes little sense,” Oversight Board Co-Chair Michael McConnell said in a statement. “It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”
Meta did not say whether it would follow the Oversight Board’s guidance. A spokesman said the company was reviewing the recommendations and would respond publicly within 60 days.
Even if Meta makes changes to its manipulated media policy, observers say there's no guarantee it will put enough money and resources into enforcing the changes.
“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”
Meta defended its election integrity policies.
“We have around 40,000 people globally working on safety and security, and protecting the 2024 elections is one of our top priorities," the company said in a statement. "Our integrity efforts continue to lead the industry and with each election we incorporate the lessons we’ve learned to help stay ahead of emerging threats.”
In the first AI election, 'a tsunami of disinformation'
The stakes are not just high in the United States. In 2024, more people will have a chance to vote than in any previous election, increasing the likelihood that AI will play a role at the ballot box. And that's raising concerns.
With rapid advances in technology and too little oversight from the government or private sector, election experts have been bracing for the malicious use of deepfakes in the 2024 presidential contest. Virtually anyone can now create or digitally alter images and clips in realistic ways to deceive voters.
Like other technology companies, Meta has made pledges to curb the harms of generative AI. Yet, even as the technology grows more sophisticated, powerful and ubiquitous, there are still very few rules governing its use.
In the case of the doctored video, the original footage showed Biden accompanying his granddaughter for her first time voting in October 2022. Biden placed an “I voted” sticker near her neckline as she instructed then kissed her on the cheek. But the looped version made it seem as if Biden were repeatedly touching her chest. The caption labeled Biden a “sick pedophile.”
Meta left the video up, saying it did not violate its rules because it was not altered using AI and did not show Biden saying something he did not say. The company made a similar decision in 2019 over a clip that was slowed down to make then-House Speaker Nancy Pelosi appear drunk − even as Democrats fumed.
Biden’s 2024 campaign has set up a deepfake task force to respond to misleading AI-generated falsehoods and propaganda.
“There is going to be a tsunami of disinformation in 2024. We are already seeing it, and it is going to get much worse,” said Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution. “People are anticipating that this will be a close election, and anything that shifts 50,000 votes in three or four states could be decisive.”
How Facebook and other social media platforms police faked content
What’s alarming to West is the tepid response from social media platforms that host this content.
Rather than strengthening protections, Meta and other major technology companies have loosened their misinformation policies and laid off staffers charged with policing lies and propaganda since the 2020 election, West said.
Meta also now permits political ads that question the legitimacy of the 2020 U.S. presidential election, though it still bars ads questioning the legitimacy of current or upcoming elections.
“So at a time when fake videos are becoming rampant, their capacity to deal with it is quite limited,” West said.
When policing fake election content, social media platforms can take it down, slap warning labels on it or demote it.
To ensure the policy is "proportionate," the Oversight Board recommended that Meta stop removing manipulated media when there is no other policy violation and instead apply a label warning the content has been significantly altered and may be misleading.
It also discouraged Meta from demoting content that fact-checkers identify as altered or fake without informing users or providing an appeals process.
“Political speech must be unwaveringly protected. This sometimes includes claims that are disputed and even false, but not demonstrably harmful,” McConnell said.
Facebook not doing enough to protect elections, critics charge
Hany Farid, a UC Berkeley professor who specializes in deepfakes and disinformation, gets daily inquiries about fake images on the internet, from Biden in military fatigues in the situation room to Trump with pedophile Jeffrey Epstein. He says the use of warning labels for this kind of malicious content is "cowardly."
While warning labels provide cover for Facebook, the average person either doesn't care about the label or ignores it, he said. Most of the time the labels are not added until a video has already racked up millions of views. What's more, anyone can then repost the video elsewhere without the label.
According to Farid, Facebook, whose algorithms serve up content that stirs strong emotions, has been on the wrong side of this issue for the last 15 years.
“It’s hard to take Facebook seriously when they say we have these policies and it’s clear those policies are in place to maximize their profits,” he said.
Election experts call for deepfake regulations
Any efforts by social media companies to rein in doctored or AI-generated content should be paired with thoughtful standards crafted by regulators and policymakers, says Daniel Weiner, director of the Brennan Center’s Elections and Government Program.
While AI-generated depictions of Biden are quickly debunked, what about a local candidate for city council or the school board?
Last year, Sen. Richard Blumenthal, D-Conn., launched a Senate Judiciary Committee hearing into the potential dangers of deepfakes by playing an AI-generated recording that mimicked his voice and read a ChatGPT-generated script.
“The latest advances in AI technology, more than anything else, have reinforced the need to strengthen fundamental guardrails for our political system,” Weiner said. “These problems existed before. They would exist if every deepfake disappeared tomorrow. And, a lot of times, the solutions aren’t AI-specific. They are about the need for a broader strengthening of democracy.”